First we run CLIP interrogation, using AI to generate a text description of each image we want to train the model on. We then review and edit those descriptions to prepare them for training.
Using pictures from Vodafone ads, we extract the AI's interpretation of each image and add our own prompts to describe it.
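The review-and-edit step can be partly scripted. Below is a minimal sketch, assuming kohya's convention that `photo.txt` holds the caption for `photo.png`; the trigger word and the helper name are my own placeholders. It prepends a trigger token to every caption that does not already start with it:

```python
from pathlib import Path

def prepend_trigger(img_dir: str, trigger: str) -> int:
    """Prepend a trigger word to every .txt caption file in img_dir.

    kohya_ss reads the caption for `photo.png` from `photo.txt`
    (see --caption_extension=".txt"), so editing these files is how
    the CLIP-interrogated descriptions get refined before training.
    Returns the number of files that were changed.
    """
    edited = 0
    for caption_file in Path(img_dir).glob("*.txt"):
        text = caption_file.read_text(encoding="utf-8").strip()
        if not text.startswith(trigger):
            caption_file.write_text(f"{trigger}, {text}", encoding="utf-8")
            edited += 1
    return edited
```

Running it once over the image folder is enough; captions that already carry the trigger are left untouched.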
Once we are done, we train the LoRA:

accelerate launch --num_cpu_threads_per_process=2 "/mnt/c/Program Files/kohya_ss/sd-scripts/sdxl_train.py" \
  --bucket_no_upscale --bucket_reso_steps=64 \
  --cache_latents --cache_latents_to_disk \
  --caption_extension=".txt" \
  --enable_bucket --min_bucket_reso=256 --max_bucket_reso=2048 \
  --gradient_checkpointing \
  --learning_rate="0.0003" --learning_rate_te1="0.0003" --learning_rate_te2="0.0003" \
  --logging_dir="/mnt/c/Users/ardke/OneDrive/Desktop/vodafone_train_LoRa/New folder/log" \
  --lr_scheduler="cosine" --lr_scheduler_num_cycles="10" \
  --max_data_loader_n_workers="0" \
  --resolution="768,768" --max_train_steps="4000" \
  --mixed_precision="fp16" \
  --optimizer_args scale_parameter=False relative_step=False warmup_init=False \
  --optimizer_type="Adafactor" \
  --output_dir="/mnt/c/Users/ardke/OneDrive/Desktop/vodafone_train_LoRa/New folder/model" \
  --output_name="vodafone-family" \
  --pretrained_model_name_or_path="/mnt/c/Program Files/ComfyUI_windows_portable_nvidia_cu121_or_cpu/ComfyUI_windows_portable/ComfyUI/models/checkpoints/sd_xl_base_1.0.safetensors" \
  --reg_data_dir="/mnt/c/Users/ardke/OneDrive/Desktop/vodafone_train_LoRa/New folder/reg" \
  --save_every_n_epochs="1" --save_model_as=safetensors --save_precision="fp16" \
  --train_batch_size="3" \
  --train_data_dir="/mnt/c/Users/ardke/OneDrive/Desktop/vodafone_train_LoRa/New folder/img" \
  --xformers
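Before launching, it helps to sanity-check the dataset folder that `--train_data_dir` points at. The sketch below assumes kohya_ss's default layout, where each image subfolder is named `<repeats>_<name>` (e.g. `20_vodafone`) and each image has a matching `.txt` caption; the function name and checks are my own:

```python
import re
from pathlib import Path

IMAGE_EXTS = {".png", ".jpg", ".jpeg", ".webp"}

def check_kohya_layout(train_data_dir: str) -> list:
    """Return a list of problems found in a kohya-style training folder.

    Checks two conventions the trainer relies on:
    - image subfolders are named '<repeats>_<name>', e.g. '20_vodafone'
    - every image file has a caption file with the same stem and .txt suffix
    An empty list means the layout looks ready for training.
    """
    problems = []
    root = Path(train_data_dir)
    subdirs = [d for d in root.iterdir() if d.is_dir()]
    if not subdirs:
        problems.append("no subfolders found in train_data_dir")
    for sub in subdirs:
        if not re.match(r"^\d+_", sub.name):
            problems.append(f"{sub.name}: missing '<repeats>_' prefix")
        for img in sub.iterdir():
            if img.suffix.lower() in IMAGE_EXTS and not img.with_suffix(".txt").exists():
                problems.append(f"{img.name}: no caption file")
    return problems
```

Running this once before `accelerate launch` catches the two most common silent failures: a flat image folder with no repeats prefix, and images whose captions were never written.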
Training LoRa
Published:
